EduQuest is a hybrid intelligent quiz generation and real-time assessment platform designed to reduce the 3–5 hours educators spend weekly creating quizzes. It combines custom NLP pipelines (TF-IDF, RAKE, LDA, dependency parsing) with optional AI enhancements to extract key concepts from PDFs or topic descriptions. A rule-based system generates multiple-choice questions, and a Random Forest classifier categorizes difficulty with 82% accuracy. A WebSocket-based framework delivers real-time scoring with sub-200ms latency for 200+ concurrent users. Experiments show that 80–85% of quiz-generation needs are met by resource-efficient, low-cost methods, reducing reliance on commercial AI APIs while maintaining question quality. EduQuest thus offers a transparent, customizable, and budget-friendly AI-powered educational tool.
Introduction
This paper presents EduQuest, a hybrid intelligent quiz generation and real-time assessment platform designed to reduce educator workload while increasing student engagement in digital education. Traditional quiz creation is time-consuming, inconsistent, and poorly aligned with modern interactive learning needs. While large language models enable automated question generation, reliance on commercial APIs raises concerns about cost, transparency, and accessibility, especially for budget-constrained institutions.
EduQuest addresses these issues through a hybrid architecture, where 80–85% of functionality is powered by custom-built NLP pipelines and lightweight machine learning models, with optional AI enhancement. The system uses established NLP techniques such as TF-IDF, RAKE, LDA, dependency parsing, and rule-based linguistic templates to generate multiple-choice questions. A Random Forest–based difficulty classifier assigns Easy/Medium/Hard labels with 82% accuracy using 28 linguistic and cognitive features, including readability metrics and Bloom’s taxonomy levels.
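To make the pipeline concrete, a minimal sketch of the extraction-and-templating step follows, assuming scikit-learn and spaCy as named above. The function names, the cloze template, and the example distractors are illustrative and not drawn from EduQuest's codebase; the full system also applies RAKE, LDA, and dependency-parse rules beyond what is shown here.

```python
# Minimal sketch: TF-IDF concept extraction plus a rule-based cloze MCQ template.
# Assumes scikit-learn and spaCy are installed; the model needs
#   python -m spacy download en_core_web_sm
import spacy
from sklearn.feature_extraction.text import TfidfVectorizer

nlp = spacy.load("en_core_web_sm")

def extract_key_terms(docs, top_k=5):
    """Rank unigram/bigram candidates in each document by TF-IDF weight."""
    vec = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
    weights = vec.fit_transform(docs)
    terms = vec.get_feature_names_out()
    ranked = []
    for row in weights.toarray():
        top = row.argsort()[::-1][:top_k]
        ranked.append([terms[i] for i in top if row[i] > 0])
    return ranked

def make_cloze_question(sentence, answer, distractors):
    """Blank out the key term to form an MCQ stem (shuffle options before delivery)."""
    return {"stem": sentence.replace(answer, "_____"),
            "options": [answer] + distractors,
            "answer": answer}

text = "Photosynthesis converts light energy into chemical energy in plants."
sentence = next(nlp(text).sents).text  # sentence segmentation via spaCy
print(extract_key_terms([text]))
print(make_cloze_question(sentence, "Photosynthesis",
                          ["Respiration", "Fermentation", "Transpiration"]))
```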
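The difficulty classifier can be sketched in the same stack. The paper specifies a Random Forest over 28 linguistic and cognitive features; the three features below (a Flesch readability score via the third-party textstat package, question length, and a Bloom's-taxonomy level) are assumed stand-ins for that feature set, and the toy labels exist only to make the snippet runnable.

```python
# Sketch of Easy/Medium/Hard classification; the published system uses 28 features.
from sklearn.ensemble import RandomForestClassifier
import textstat  # third-party readability package (assumption; pip install textstat)

BLOOM = {"remember": 1, "understand": 2, "apply": 3,
         "analyze": 4, "evaluate": 5, "create": 6}

def featurize(question, bloom_verb):
    """Three illustrative features standing in for EduQuest's 28."""
    return [textstat.flesch_reading_ease(question),  # readability metric
            len(question.split()),                   # surface-length proxy
            BLOOM.get(bloom_verb, 1)]                # cognitive-demand level

# Toy training set; real training would use a labeled question bank.
X = [featurize("What is TF-IDF?", "remember"),
     featurize("Compare LDA and TF-IDF for topic discovery.", "analyze"),
     featurize("Design a study to evaluate keyword extraction.", "create")]
y = ["Easy", "Medium", "Hard"]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([featurize("Explain why RAKE favors multi-word phrases.", "understand")]))
```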
The platform uniquely integrates automated question generation with real-time, gamified assessment delivery using WebSocket-based synchronization. Features include fastest-finger-first scoring, live leaderboards, synchronous and asynchronous quiz modes, and scalable room-based participation. Educators can review and edit generated questions before deployment, ensuring pedagogical control and transparency.
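A minimal sketch of the fastest-finger-first scoring loop is given below. EduQuest's real-time layer runs on Node.js, so this asyncio version, the third-party websockets package, the message schema, and the linear-decay-plus-bonus scoring formula are all illustrative assumptions rather than the deployed protocol.

```python
# Sketch of room-based fastest-finger-first scoring over WebSockets.
# Assumption: pip install websockets; message format {"room", "user", "correct"}.
import asyncio, json, time
import websockets

rooms = {}  # room_id -> {"answered": [...], "scores": {...}, "opened_at": float}

def score(correct, elapsed_s, rank, base=100, window=20.0):
    """Points decay linearly over the answer window; early responders get a bonus."""
    if not correct:
        return 0
    speed = max(0.0, 1.0 - elapsed_s / window)
    bonus = {0: 50, 1: 30, 2: 10}.get(rank, 0)
    return int(base * speed) + bonus

# path=None keeps the handler compatible with both legacy (ws, path) and
# current single-argument websockets handler signatures.
async def handle(ws, path=None):
    async for raw in ws:
        msg = json.loads(raw)
        room = rooms.setdefault(msg["room"], {"answered": [], "scores": {},
                                              "opened_at": time.monotonic()})
        elapsed = time.monotonic() - room["opened_at"]
        rank = len(room["answered"])  # order of response (simplified)
        room["answered"].append(msg["user"])
        pts = score(msg["correct"], elapsed, rank)
        room["scores"][msg["user"]] = room["scores"].get(msg["user"], 0) + pts
        await ws.send(json.dumps({"points": pts, "leaderboard": room["scores"]}))

async def main():
    async with websockets.serve(handle, "localhost", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```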
Built on a modern web stack (React, Node.js, MongoDB, spaCy, scikit-learn), EduQuest demonstrates that resource-efficient, open, and customizable systems can deliver intelligent educational tools without heavy dependence on proprietary AI services. Overall, the platform fills key gaps in accessibility, transparency, and workflow integration in digital assessment systems.
Conclusion
EduQuest is a hybrid AI-powered quiz and real-time assessment platform that uses custom NLP, rule-based question generation, and machine learning-based difficulty classification, with commercial APIs as optional enhancements. It achieved 82% difficulty classification accuracy, a question-quality rating of 4.2/5, 71% higher student engagement, and sub-200ms latency for 200+ concurrent users. By relying on transparent, open-source technologies, EduQuest saves educators 78% of quiz creation time and democratizes AI-powered assessment for budget-constrained institutions. Future work includes adaptive difficulty, multimodal question generation, cross-lingual support, and studies on long-term learning outcomes.